Content Monetization on Twitter: A Study of Platform Documentation and Transatlantic Legal Implications
Social media platforms have long been considered public squares: democratic spaces of public interest where communities can gather and discuss matters relevant to every member of society. Yet new actors relevant to the space of online speech continue to emerge. A more recent category of such stakeholders is content creators. Also known as influencers, content creators are a new iteration of the gig economy, engaging in the generation and monetization of user content. So far, research has largely focused on monetization from the perspective of content creators. In this working paper, I take the perspective of platforms in order to better understand monetization policies. To this end, I focus on Twitter as an example of a platform with a blossoming content monetization strategy, which has also been changing significantly in light of Elon Musk's plans for the future of the platform. As monetization becomes more complex and entails an increasing number of transactions, it is important to understand what kind of frameworks platforms develop around their monetization products. The goal of this paper is two-fold: to understand content monetization on Twitter according to the platform's own rules, policies and practices; and to highlight the complex legal framework that applies to content monetization from a transatlantic perspective. The paper is structured as follows. Section 2 offers a general overview of content monetization based on existing literature and taxonomies. Section 3 addresses Twitter monetization as a case study and presents a methodology for the selection of platform documentation. Section 4 outlines some regulatory questions relating to monetization from the perspective of US and EU law, highlighting some essential concerns arising out of the increasing complexity of the relevant legal frameworks. Section 5 provides a brief discussion of the findings and concludes.
A Multimodal Analysis of Influencer Content on Twitter
Influencer marketing involves a wide range of strategies in which brands collaborate with popular content creators (i.e., influencers) to leverage their reach, trust, and impact on their audience to promote and endorse products or services. Because followers of influencers are more likely to buy a product after receiving an authentic product endorsement rather than an explicit direct product promotion, the line between personal opinions and commercial content promotion is frequently blurred. This makes automatic detection of regulatory compliance breaches related to influencer advertising (e.g., misleading advertising or hidden sponsorships) particularly difficult. In this work, we (1) introduce a new Twitter (now X) dataset consisting of 15,998 influencer posts mapped into commercial and non-commercial categories for assisting in the automatic detection of commercial influencer content; (2) experiment with an extensive set of predictive models that combine text and visual information, showing that our proposed cross-attention approach outperforms state-of-the-art multimodal models; and (3) conduct a thorough analysis of strengths and limitations of our models. We show that multimodal modeling is useful for identifying commercial posts, reducing the amount of false positives, and capturing relevant context that aids in the discovery of undisclosed commercial posts.
Comment: Accepted at AACL 202
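The cross-attention fusion this abstract refers to can be illustrated schematically. The following is a minimal NumPy sketch, not the paper's implementation: it only shows the core idea of text token embeddings (queries) attending over image patch embeddings (keys/values); all shapes and dimensions are toy values.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(text_feats, image_feats):
    """Text tokens attend over image patches (scaled dot-product attention).

    text_feats:  (n_text, d) token embeddings, used as queries
    image_feats: (n_img, d)  patch embeddings, used as keys and values
    Returns fused features of shape (n_text, d).
    """
    d = text_feats.shape[-1]
    scores = text_feats @ image_feats.T / np.sqrt(d)   # (n_text, n_img)
    weights = softmax(scores, axis=-1)                 # attention over patches
    return weights @ image_feats                       # (n_text, d)

rng = np.random.default_rng(0)
text = rng.normal(size=(4, 8))    # 4 text tokens, dim 8
image = rng.normal(size=(6, 8))   # 6 image patches, dim 8
fused = cross_attention(text, image)
print(fused.shape)  # (4, 8)
```

In a full model these fused features would typically be projected and passed to a classification head; the paper's architecture will differ in its projections, heads, and training details.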
Clout Chasing for the Sake of Content Monetization: Gaming Algorithmic Architectures with Self-Moderation Strategies
This short discussion paper addresses how controversy is monetized online by reflecting on a new iteration of shock value in media production, identified on social media as the ‘clout chasing’ phenomenon. We first exemplify controversial behavior and subsequently define clout chasing, discussing this concept in relation to existing frameworks for the understanding of controversy on social media. We then outline what clout chasing entails as a content monetization strategy and address the risks associated with this approach. In doing so, we introduce the concept of ‘content self-moderation’, which encompasses how creators use content moderation as a way to hedge monetization risks arising out of their reliance on controversy for economic growth. This concept is discussed in the context of the automated content governance entailed by algorithmic platform architectures, contributing to existing scholarship on platform governance.
Digital Influencers, Monetization Models and Platforms as Transactional Spaces
This paper aims to discuss the impact of digital influencers’ content monetization on social media in the context of platform governance. To achieve this objective, it characterizes the Monetization Supply Chain and the different monetization models: ad revenue; on-platform influencer marketing; subscription, tokenization and crowdfunding; direct selling; and creator funds, besides traditional influencer marketing. It also presents preliminary analyses of a dataset of posts by 400 influencers from four countries (Brazil, Germany, the Netherlands and the United States of America) to understand how content creators from different countries are framing sponsored content.
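The taxonomy of monetization models enumerated in this abstract can be captured as a simple mapping. The sketch below just restates the models the paper lists; the one-line glosses and the platform-mediated/creator-driven split are illustrative assumptions, not the paper's definitions.

```python
# Monetization models enumerated in the paper, keyed by model name.
# The short descriptions are illustrative glosses, not the paper's definitions.
MONETIZATION_MODELS = {
    "ad_revenue": "share of platform advertising income",
    "on_platform_influencer_marketing": "brand deals brokered via the platform",
    "subscription": "recurring payments from followers",
    "tokenization": "sale of platform tokens or digital goods",
    "crowdfunding": "one-off contributions from the audience",
    "direct_selling": "creator sells products or services directly",
    "creator_funds": "payouts from platform-run creator funds",
    "influencer_marketing": "traditional off-platform brand sponsorships",
}

def is_platform_mediated(model):
    # A rough illustrative split (an assumption, not a distinction drawn in
    # the paper): does the money flow through the platform's own machinery?
    return model in {"ad_revenue", "on_platform_influencer_marketing",
                     "tokenization", "creator_funds"}

print(sorted(MONETIZATION_MODELS))
```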
Closing the Loop: Testing ChatGPT to Generate Model Explanations to Improve Human Labelling of Sponsored Content on Social Media
Regulatory bodies worldwide are intensifying their efforts to ensure transparency in influencer marketing on social media through instruments like the Unfair Commercial Practices Directive (UCPD) in the European Union, or Section 5 of the Federal Trade Commission Act. Yet enforcing these obligations has proven to be highly problematic due to the sheer scale of the influencer market. The task of automatically detecting sponsored content aims to enable the monitoring and enforcement of such regulations at scale. Current research in this field primarily frames this problem as a machine learning task, focusing on developing models that achieve high classification performance in detecting ads. These machine learning tasks rely on human data annotation to provide ground truth information. However, agreement between annotators is often low, leading to inconsistent labels that hinder the reliability of models. To improve annotation accuracy and, thus, the detection of sponsored content, we propose using ChatGPT to augment the annotation process with phrases identified as relevant features and brief explanations. Our experiments show that this approach consistently improves inter-annotator agreement and annotation accuracy. Additionally, our survey of user experience in the annotation task indicates that the explanations improve the annotators' confidence and streamline the process. Our proposed methods can ultimately lead to more transparency and alignment with regulatory requirements in sponsored content detection.
Comment: Accepted to The World Conference on eXplainable Artificial Intelligence, Lisbon, Portugal, July 202
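The explanation-augmented annotation step described in this abstract can be sketched as a prompt-and-parse loop. Everything below is a hypothetical illustration: the prompt wording, the JSON schema, and the helper names are assumptions, not the paper's setup, and the model response is mocked rather than produced by an actual API call.

```python
import json

def build_prompt(post_text):
    # Ask the model for a label suggestion, the cue phrases that drove it,
    # and a one-sentence explanation to show human annotators.
    # Prompt wording is a hypothetical sketch, not the paper's prompt.
    return (
        "Decide whether the following social media post is sponsored content.\n"
        'Answer in JSON with keys "label" ("sponsored" or "not_sponsored"), '
        '"phrases" (a list of cue phrases quoted from the post), and '
        '"explanation" (one sentence).\n\n'
        f"Post: {post_text}"
    )

def parse_response(raw):
    # Parse the model's JSON answer into the pieces shown to annotators.
    data = json.loads(raw)
    return data["label"], data["phrases"], data["explanation"]

# Mocked model output, standing in for an actual chat-completion call:
mock = ('{"label": "sponsored", "phrases": ["#ad", "use my code"], '
        '"explanation": "The post contains a discount code and an #ad hashtag."}')
label, phrases, explanation = parse_response(mock)
print(label, phrases)
```

In the workflow the abstract describes, the returned phrases and explanation would be displayed alongside the post so that human annotators make the final labelling decision; the model output augments rather than replaces their judgment.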
Return of the AI: An analysis of legal research on Artificial Intelligence using topic modeling
AI research finds itself in the third boom of its history, and in recent years, AI-related themes have gained considerable popularity in new disciplines, such as law. This paper explores what legal research on AI consists of and how it has evolved, while addressing the issues of information retrieval and research duplication. Using Latent Dirichlet Allocation (LDA) topic modeling on a dataset of 3931 journal articles, we explore three questions: (a) Which topics within legal research on AI can be distinguished? (b) When were these topics addressed? and (c) Can similar papers be detected? The topic modeling results in a total of 32 meaningful topics. Additionally, it is found that legal research on AI drastically increased as of 2016, with topics becoming more granular and diverse over time. Finally, a comparison of the similarity assessments produced by the algorithm and a human expert suggests that the assessments often coincide. The results provide insights into how legal research on AI has evolved over time, and support for the development of machine learning and information retrieval tools like LDA that assist in structuring large document collections and identifying relevant articles.